This post is another summary that mostly ties together various items I’ve covered separately, but also adds some new material.
We know that the exit polls had an average error favoring John Kerry of 1.9% per precinct. What explains the error? There are many theories that are, in my view, far more plausible than the notion of widespread fraud or problems with the count. Unfortunately, we lack definitive proof of a cause, both because NEP has not released any of its internal analysis and because such proof is often elusive. Let’s review what we do know.
Warren Mitofsky and Joe Lenski, the researchers who conducted the exit polls on behalf of the National Election Pool (NEP), have so far been circumspect in their public comments. They have offered theories, but stopped short of claiming definitive proof for any explanation of the error favoring Kerry. Here is a sampling of their on-the-record speculation:
Kerry was ahead in a number of the — in a number of the states by margins that looked unreasonable to us. And we suspect that the reason, the main reason, was that the Kerry voters were more anxious to participate in our exit polls than the Bush voters…in an exit poll, everybody doesn’t agree to be interviewed. It’s voluntary, and the people refuse usually at about the same rate, regardless of who they support. When you have a very energized electorate, which contributed to the big turnout, sometimes the supporters of one candidate refuse at a greater rate than the supporters of the other candidate. (Warren Mitofsky on The News Hour, November 5, 2004)
In addition, some inquiry into what went wrong with the exit polls is also necessary. Thankfully, Lenski told me that such a probe is currently underway; there are many theories for why the polls might have skewed toward Kerry, Lenski said, but he’s not ready to conclude anything just yet. At some point, though, he said we’ll be able to find out what happened, and what the polls actually said. (Farhad Manjoo for Salon.com, November 12, 2004)
One thing [Warren Mitofsky] confirmed to me is that the average deviation to Kerry in the completed version of the exit poll is estimated at +1.9%. When asked if the full 1.9% deviation could be explained by non-response bias (Kerry voters being more likely to complete the exit poll than Bush voters), he said, "It’s my opinion, but I can’t prove it." He went on to say that it would be an impossible thing to "prove" categorically because there exist an infinite number of variables that could have a micro-impact on the exit poll which could combine for a statistically significant impact. These factors ranged from the weather to the distance from the polling place some of his poll takers were forced to stand. He is also trying to determine whether there is a statistically significant correlation between certain types of precincts and the non-response deviation. Again, right now he feels the most reasonable and logical explanation of the average 1.9% deviation for Kerry was non-response bias. (Blogger Chris Johnson of Mayflower Hill, November 17, 2004).
Whether the internal NEP analysis ever sees the light of day remains an open question. The networks have obviously resisted public disclosure to date. Four years ago, a Congressional investigation into the election night snafus helped motivate several networks to release internal reports. That pressure is lacking this year, so we will have to wait and see.
Until then, we can make some educated guesses about the analysis they have done or are doing. First, the NEP analysts can examine several potential sources of error to see if they contributed to any systematic bias for Kerry. By examining actual vote returns they can identify errors in:
- The random samples of precincts
- The hard counts of turnout obtained by interviewers
- Data entry or tabulation
- Telephone surveys of absentee voters (in 13 states listed here)
- Absentee voting not covered by the exit polls (in 37 states and DC)
While all of these factors could have introduced error, with the possible exception of absentee ballots, it is hard to imagine how any could have contributed to a systematic bias favoring Kerry. Moreover, these problems are relatively easy to identify once the full count is available. As such, I’m assuming that if any of these factors could explain the errors favoring Kerry, we would have heard about it already.
The second step is to look at the error that remains, something the analysts refer to as "within precinct error." The most likely culprit is some combination of response and coverage error. Response error refers to randomly selected voters who declined to participate in the survey; coverage error refers to voters who were never included in the sample because they exited the polls while the interviewer was away, or did not pass the interviewer on their way out.
The good news is that the exit pollsters have more tools at their disposal to help study response and coverage error than other survey researchers. Since interviewers are face to face with potential respondents, they keep a tally of the gender, race and approximate age of refusals. Most important, the NEP analysts can calculate the difference between the poll and the actual count within each precinct. They know quite a bit about each precinct: how it voted, the type of voting equipment used at the polling place, the number of exit doors at the polling place, whether the polling place officials were cooperative and how far the interviewer had to stand from the exit. They also know the age, gender and level of experience of the interviewer at each precinct. They can use all of these characteristics and more to see if any tend to explain the error in Kerry’s favor.
The bad news is that it is difficult to say much about the voters who refused to be interviewed, because…well…they were not interviewed. If the NEP analysts are lucky, they will be able to draw some inferences from the characteristics of the precincts where the error was greatest, but otherwise, explanations may be elusive, even with all the data available.
Those of us not privy to the internal investigation are left to speculate about the most plausible theories. Here are some educated guesses, but keep in mind, these are just hypotheses:
Bush voters were more reluctant to be interviewed – As summarized in an earlier post, Republicans and conservatives have long reported less trust of the national media and, as such, may be slightly less likely to want to participate in the exit polls. The NEP interviewers and materials prominently display their big network sponsorship.
Kerry voters were more likely to volunteer to be interviewed – Exit polls have a potential "back door" that other surveys lack. On telephone surveys, a respondent cannot possibly volunteer to be interviewed. In an exit poll, the interviewer is supposed to pick every third or fifth exiting voter, but others may still approach and express an interest in being interviewed. Interviewers are instructed to deny such requests, but only the training and diligence of the interviewers will prevent deviations from the sampling procedure.
This year’s NEP exit poll interviewers were trained via telephone and most worked for just one day without supervision. With roughly 50 interviews per precinct and a 50% response rate, it would only take an average of one non-random "volunteer" respondent favoring Kerry per precinct to create a 2% error.
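The arithmetic behind that claim is easy to check. The sketch below uses a hypothetical precinct with a true 50/50 split, not actual NEP data:

```python
# Hypothetical precinct: 50 completed interviews, true 50/50 split
true_kerry, true_bush = 25, 25
n = true_kerry + true_bush

# Kerry-Bush margin in the sample before any volunteers: 0 points
margin_before = 100 * (true_kerry - true_bush) / n

# One self-selected Kerry "volunteer" slips past the sampling interval
kerry, total = true_kerry + 1, n + 1
margin_after = 100 * (kerry - true_bush) / total  # roughly 2 points
```

One extra respondent among 51 moves the sampled Kerry-Bush margin by about two points, which is the scale of the reported average error.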
Now consider: The Democratic National Committee waged an apparently successful campaign to get the 5 million members on its email lists to vote for Kerry in unscientific online polls. A CBS News report found that John Kerry ran "about 20 points better" in non-scientific online polls than in traditional, random sample surveys (thanks to alert reader BB for the link). Is it possible that some of this enthusiasm to respond to online surveys carried over to Election Day and made some partisan Democrats more apt to volunteer to take the exit polls?
Bush voters were more likely to avoid exit pollsters who were forced to stand among electioneering partisans – Overzealous election officials who force exit pollsters to stand 100 feet or more from the polling place consistently present exit pollsters with their biggest logistical challenge. At that distance the interviewers cannot cover all exiting voters, and worse, often get trapped standing among electioneering partisans – a gaggle most voters try to avoid.
Now consider that several news accounts suggest that Democratic campaigns and groups like ACT and Moveon.org put far greater emphasis on Election Day visibility than their Republican counterparts. Matt Bai’s post-election piece for the New York Times Sunday Magazine noted the puzzlement of Democratic organizers that their "field offices weren’t detecting any sign of Bush canvassers on the streets or at the polls." Is it possible that exit poll interviewers found themselves frequently standing among Democratic partisans that exiting Republican voters might want to avoid?
Again, all of this is just speculation, and definitive proof may be elusive even to those with access to the raw data. However, some combination of the above most likely caused the exit poll errors that favored Kerry.
By comparison, the alternative explanation for the exit poll "discrepancy" – widespread nationwide voter fraud – is wildly implausible. Consider the preliminary finding that Warren Mitofsky shared with blogger Chris Johnson:
One possibility [Mitofsky] was able to rule out, though, is touch screen voting machines that don’t leave any paper trail being used to defraud the election. To prove this, he broke down precincts based on the type of voting machine that was used and compared the voting returns from those precincts with his own exit polls. None of the precincts with touch screen computers that don’t leave paper trails, or any other type of machine for that matter, had vote returns that deviated from his exit poll numbers once the average 1.9% non-response bias was taken into account.
In other words, the size of the "discrepancy" between the exit polls and the vote did not vary by the type of voting equipment used at the precinct. Now if you believe there were problems in the count that were limited to a few counties or precincts, then the exit poll "discrepancy" has little relevance. Even if such problems occurred, the NEP exit polls lacked the statistical power to detect small errors within individual counties or precincts.
However, if you believe the error in the exit polls presents evidence of widespread fraud, you need to explain how such a fraud could have been committed consistently across all types of voting equipment and in all the battleground states. You would also have to reconcile that theory with New Hampshire, where the exit polls overstated John Kerry’s support by 5%, yet a Ralph Nader-sponsored recount found no noteworthy discrepancies. A Nader spokesman concluded, "it looks like a pretty accurate count here in New Hampshire."
None of this makes much sense. The more plausible explanation is that a problem evident to some degree in the exit polls in every election since 1990 – a problem most likely caused by some combination of response and coverage error – simply got worse.
Excellent and useful information. Thank you for the effort that went into compiling this!
Today, the Ohio recount was made official. OH has a law that requires a paper trail, so the votes were recounted under supervision. The result was something like 300 votes off the original count.
http://story.news.yahoo.com/news?tmpl=story&ncid=703&e=1&u=/ap/20041228/ap_on_re_us/ohio_vote
One more theory to throw in your bag that I’ve mentioned before and actually was first posed by our friend and enigma, John Goodwin.
I’ll put my money on it that differential non-response will be the thrust of Edison/Mitofsky’s conclusions in their report to be issued in January.
Note the Mayflower Hill paraphrase of a statement attributed to Mitofsky above:
“He is also trying to determine whether there is a statistically significant correlation between certain types of precincts and the non-response deviation.”
Then consider your statement:
“Since interviewers are face to face with potential respondents, they keep a tally on the gender, race and approximate age of refusals.”
This is only useful if you “know” something about how these “characteristics” are normally correlated with voting behavior.
For example, what if a “characteristic” was highly clustered in certain precincts (say a precinct has a high concentration of Blacks, seniors, or Jewish people) and these precincts tallied an unusually high share of Bush votes when compared to what we “know” about how Blacks, Seniors, and Jewish people typically vote.
Assume now that you buy the hypothesis that due to social pressure in highly clustered precincts, Blacks, Seniors, and Jewish people who voted for Bush didn’t respond to (or lied on) the exit polls at a higher rate than Kerry voters with these same characteristics (I voted for Bush because I was scared to death of terrorists… (Or) I really don’t like gays and Bush is anti-gay rights…. (And) But what would my family and friends say if they ever found out? I better not take this exit poll, who knows who could find out???)
What do you get?
Two possible things: 1) the differential non-response (Bush voters not responding as readily as Kerry voters) leads to Kerry bias in the exit poll; and 2) In highly clustered precincts, when/if the non-responses are “corrected” or “weighted” according to “what we know” about these characteristics, then we get even more bias in the exit poll (I don’t know if the data distributed to the networks at any point on election day was weighted to correct for non-response…).
When you consider a Z-test of the Freeman data for the Bush and Kerry proportions using the Margin of Error table provided by the NEP, you find:
1) significant discrepancies (Z-score>1.96) for Alabama, Delaware, Minnesota, New Hampshire, New York, North Carolina, Pennsylvania, Rhode Island, Vermont, and Florida;
2) the error was greatest (Z-score>2.35) in New Hampshire, New York, South Carolina, and Vermont.
[Please note that these Z-scores are not exact and have an error bound range. This only serves the purpose of identifying the states where the Kerry bias was the most severe.]
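The Z-test described above can be sketched as follows. The shares and margin of error in the example are hypothetical placeholders, not Freeman's actual figures:

```python
def z_score(poll_share, official_share, moe_95):
    """Approximate Z for the gap between an exit-poll share and the
    official count, treating a published 95%-level margin of error
    as 1.96 standard errors."""
    se = moe_95 / 1.96
    return (poll_share - official_share) / se

# hypothetical state: exit poll 52% Kerry, official count 49%, MoE +/- 3 points
z = z_score(52.0, 49.0, 3.0)  # 1.96: right at the conventional cutoff
```

A gap exactly equal to the published margin of error lands at Z = 1.96, the threshold the comment uses for "significant discrepancies."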
Now why would NH, NY, SC and VT show the greatest discrepancy? I suspect that as one looks at the precinct-level data, as Mayflower Hill indicated that Mitofsky is doing, there might be some correlations between non-response and highly clustered characteristics (perhaps even % Democrat would work rather than race, age, or religion).
Thanks again John Goodwin for raising this as a hypothesis. The more I look at it and the statements made by Mitofsky and others (including MP), the more I “suspect” that differential non-response threw the exit polls in 2004.
There is another fraud possibility regarding the exit poll discrepancies which you have not considered: that someone involved with the exit polling deliberately tampered with the data ever so slightly, producing the 1.9% error that you mentioned.
I made $150 betting on tradesports election option contracts (I bet on specific states and that bush would get at least 260 electoral votes). There are people on tradesports who bet amounts in the many thousands of dollars on the election. When the exit poll results were leaked on drudge and slate, as everyone knew they would be, the contracts on bush winning fell drastically, and the kerry options rose.
What if someone involved in the exit polling deliberately manipulated the data to affect the price of contracts on tradesports? Tradesports is an offshore (Ireland) betting site, so transaction details would not necessarily be available to US authorities. How do we know that no one involved with the exit polling has such a financial interest in the polling data?
This option is no more farfetched than widespread fraud in all of the states where the exit poll data did not match the election results.
I’m from Europe, I’m a total lay person on polls, but since November 2 I have read some 20000 articles and posts on election “irregularities”. Obviously, reading his opinion in the above post, Mitofsky is a lay person on election fraud. In my opinion there are two very simple possibilities in this discussion:
1. From 1988 onwards (see earlier post on your board) exit pollsters in the U.S. have become increasingly incompetent. Given the level of your experience, the level of funding involved and so on, this is highly unlikely. Remember that exit polls in civilized countries are extremely accurate.
2. The election polls were accurate in measuring voter intent, but the election process itself was not.
That the Ohio recount did not show any major differences does not mean much. Ballot stuffing, electronically or otherwise, means that the ballots will also be found in a recount. In, for instance, both Ohio and New Mexico, there are irrefutable cases of what has become known as “phantom votes” (more votes than voters in the voter logs or books). This illustrates that there is something very wrong with the integrity of the election process.
Sjerp, very interesting. I’m a grad student and although I only have one peer-reviewed publication under my belt, I think I’ve learned how to research. Can you please direct me to published analysis that shows that “election polls were accurate in measuring voter intent, but the election process itself was not.”
I’ve searched entire catalogues of over 50 university libraries via my school’s inter-library loan system. Even searched all accessible European, Australian, and US journal article databases and have found little analysis (even reference) to foreign exit polls.
I’m writing a paper on the subject and I think I’ve developed a fairly exhaustive bibliography for my literature review. However, I don’t dare say my list is actually exhaustive so I’d greatly appreciate your help in this matter. Thanks.
Rick, thank you for your post. I have an MSc in chemistry and a PhD in sociology, but it has been 15 years since I have done any research. If I’m correct your post breaks down into two questions: how can an election not reflect voter intent, and how about exit polls in other countries.
As for the latter, and as I said in my first post, I’m not at all an expert at polling. As a person interested in politics I have always been amazed at the accuracy of exit polls presented on TV early on election nights here in Europe (e.g. Germany or Netherlands). I will dust off my language skills and look around German and French language websites and databases for you in the coming days. If I come up with something I’ll let you know here.
As for the first question, an “election process not reflecting voter intent” obviously refers to forms of bias or even outright fraud. If one had exit polled the Floridians who voted on the infamous butterfly ballot in 2000, one would have polled higher support for Gore than was actually counted in terms of votes. If one polls Kerry voters who voted on a provisional ballot in Ohio in 2004, and subsequently their ballots are actually rejected (and these are major numbers), the exit poll will not reflect voter intent. If an election machine defaults to Bush under some circumstances where Kerry was intended (and there are quite a few reports that this happened), and the voter did not notice the flipped vote, voter intent as measured by an exit poll will differ from the actual count. Finally, there is the possibility of old-fashioned ballot box stuffing, which – so it seems – was quite a bit enhanced in 2004 by the introduction or expansion of absentee voting. Putting it bluntly, an exit poll would obviously not poll dead people voting and would therefore be inaccurate in measuring the intent of living voters.
Please recall that the first election in Ukraine was fixed by means of absentee ballots. So far, to be fair, I’m not aware of reliable direct evidence of ballot box stuffing in e.g. Ohio, but there are indirect indications that this may have been going on: the phantom votes mentioned in my first post, and “Albanian” turnout figures in some counties or precincts (turnout close to 100%).
One does not need “massive fraud”, or truckloads of ballots, to arrive at an election result that does not reflect voter intent. Ohio, for instance, consists of around 11,000 precincts, so with 5 wrongfully rejected provisional ballots for Kerry and 5 “inactive voters” voting anyway via absentee ballot (for Bush) in each precinct, the election result could possibly have been flipped.
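The per-precinct extrapolation the comment describes can be checked directly. The per-precinct counts are the comment's hypothetical, not documented figures:

```python
precincts = 11_000       # approximate number of Ohio precincts
rejected_kerry_per = 5   # hypothetical wrongly rejected provisional ballots (Kerry)
phantom_bush_per = 5     # hypothetical "inactive voter" absentee ballots (Bush)

# each rejected Kerry ballot and each phantom Bush ballot widens
# the Bush-Kerry margin by one vote
margin_shift = precincts * (rejected_kerry_per + phantom_bush_per)
# 110,000 votes, on the order of Ohio's official margin
```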
Anyway, you are probably aware of all of this. If not, one entrance into the world of election fraud data and theories 2004 would be http://www.freepress.org. Look past the activism and unlikely science fiction type conspiracy theories. The papers/work by Phillips, Lange and Stewart I find very informative, for starters.
Good luck with your research! If I find something on exit polls in Europe I will let you know.
I trained at the doctoral level in social psychology. I’m no methodologist by any stretch of imagination, but I don’t need to be. Being good at experimental design and the operation of statistical tools is not helpful here.
We’re not looking at a brute-force fraud, we’re looking at thousands of tiny frauds, each worth a few votes, and each disclaimable *in isolation* as simple error or ‘innocent’ bad behavior. It’s only when we step back and look at the overall pro-Bush valence of all those tiny ‘errors’ that, as with a pointillist painting, we see the image of the fraud.
Sjerp’s analysis is right on.
Thanks Sjerp, sorry my post wasn’t too clear. That’s what you get when trying to type with a baby in hand! Please keep an eye out for published material on the subject (something that I can reference in a peer-reviewed work, not questionable conspiracy theory web-sites). Specifically, please keep an eye out for published material that supports your statement: “Remember that exit polls in civilized countries are extremely accurate.”
On this point I have found absolutely NOTHING. In fact, what I’ve found (from newspaper articles) is that in fact often they are accurate in picking a winner, but not at all accurate in nailing the proportions of the vote for each candidate. Big difference!
I just read a very interesting analysis that suggested that exit polling was very accurate starting in the 1970s up through the introduction of electronic voting in the 1990s. Does anybody know if this is true? (No, Rick, there won’t be any articles on it – yet. You don’t get published by writing articles on things that work, only on anomalies and why they occur. If you are going to live in the world of academics you need to get used to this. As a matter of fact, you would be much more likely to find articles on why exit polls didn’t work in certain cases. Are there any of those pre-1990s, and how many are there?)
I can guarantee you there are a legion of sociologists and political scientists digging into the topic right now – this is going to be a very hot topic, and academic publishing is mercenary. But the type of articles we are looking for are developed at a glacial pace. Here is what I am asking pollsters, then: Before the 1990s, what was the view of exit polling? Was there anybody who questioned its accuracy and validity? Why did organizations keep spending an enormous amount of money on it?
Here is the key point to all this. If nobody can come up with criticisms of exit polling before the advent of electronic voting, then what we have from a number of political types is a lie of omission. They are massaging history to come up with answers they have already predetermined. A lie of omission is much more serious than a lie of commission in this type of discussion.
One final point. Most academics I talk with feel there is something wrong and it should be investigated, if for historical reasons if nothing else. Many of these people don’t have a particular axe to grind. They just see a disturbing pattern and cannot understand why it is not being investigated. The only people arguing that this shouldn’t be pursued are politicals on both sides of the aisle. This kind of scares me.
The one, key factor that puzzles me most of all about the ‘honest election’ claims is that we already have solid evidence of fraud and successful election-stealing from the 2000 election. We know, because NORC confirmed it, that Gore got the most votes in Florida. We know that the GOP disenfranchised thousands of legal voters (the infamous list); we watched as GOP operatives reprised the Brownshirt role at an election office; we watched in stunned disbelief as the Five Felons handed the presidency to Bush in the dead of night. And those are just the highlights (or lowlights, possibly).
We watched in ’02 as several ‘surprise upsets’ took place, all favoring the GOP, several of them purportedly the results of *large* last-minute shifts in voter support.
And we have plenty of evidence this year of GOP chicanery, including, e.g., egregiously partisan placement of voting machines and colocating voting places such that ballots could easily be discarded as having been filed wrongly, or autocounted for the wrong candidate.
Yet despite that, and despite the ‘think of horses’ rule, and despite the total lack of any plausible explanation for the ever-increasing size of the fudge factor (‘design effect’)…we still have people claiming that there’s nothing to see here and it was a fair election.
I didn’t even vote for Kerry–I live in Mass, and despise him for the self-satisfied empty suit he has consistently proven himself to be. I donated to and supported Kucinich in the primaries and voted Green in November because I swore off the Dems after Gore walked away from Coup2K. So my point here is not being colored by any pro-Kerry or pro-Dem partisanship, believe me!
Just out of curiosity, does anyone have access to the data for the Florida exit polls in 2000 x the *final* (NORC) count for Gore and Bush? I think that would be very interesting to see, don’t you all?
Wilbur, re: “I just read a very interesting analysis that suggested that exit polling was very accurate starting in the 1970s up through the introduction of electronic voting in the 1990s. Does anybody know if this is true.” The article that makes that claim provides no substantiation. In fact, if you read the citation provided, it suggests something quite different.
I’m not a statistician, but aren’t tinyurl.com/4wk8h and tinyurl.com/4r4nn relevant to the differential non-response hypothesis?
Simon and Baimon on Exit Polls
Freepress.org recently published a paper by Jonathan D. Simon, J.D., and Ron P. Baiman, Ph.D., with the title, “The 2004 Presidential Election: Who Won The Popular Vote? An Examination of the Comparative Validity of Exit Poll and Vote Count Data.”
Exit poll investigations — Finale
I never did know whether this subject was an interest to readers or just to math-geek me but if you missed it, there were two posts here which, in conjunction with Goose3five of Comments from Left Field and a guest
No, Gore did not get more votes in Florida in 2000. All, or at least most, of the recounts, including that of the media, showed Bush with more VOTES. What may be true is that more people INTENDED TO VOTE for Gore. We will never know.
But the problem with at least one of the Gore requested recounts is that instead of counting votes, they were counting presumed intent to vote. But only doing it in the one county – which is why the U.S. Supreme Court (7-2 I believe, including Democrat Breyer) held that to violate Equal Protection. In other words, you can’t have one county discerning intent to vote, while all the others discern actual legal votes.
As to voter fraud, several posters have posited slight voter fraud in a large number of precincts. The problem with that is that systemic voter fraud would require some participation by those running the elections. First, many of the precincts where there was a discrepancy between exit polls and actual voting were Democratic run. Presumably, they would have been on the lookout for Republican shenanigans. Similarly, the states where the polls were off the most were split between the parties. Secondly, there are similar discrepancies in both Bush counties and in Kerry counties, even if there weren’t that many Bush voters in the latter.
I find it silly to presume that since the exit polls differ from the vote totals, the vote totals must be wrong. It beggars belief that people apparently with graduate level statistics under their belts could honestly believe this.
What must be remembered is that in a survey of 100 people, a 1.9% miss rate really means that two people were wrongly polled. Compare this to a state with 1,000,000 voters. A similar error rate would require 19,000 fraudulent votes. Of course, in Ohio, we have some 120,000 votes that would have to have been fraudulent, and even more in Florida.
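The magnitude comparison in the paragraph above can be sketched directly:

```python
def people_behind_error(population, error_pct):
    """Number of individuals a given percentage error represents."""
    return population * error_pct / 100

poll_shift = people_behind_error(100, 1.9)        # about 2 respondents in a sample
vote_shift = people_behind_error(1_000_000, 1.9)  # 19,000 ballots statewide
```

Skewing a 100-person sample takes a couple of misrecorded respondents; producing the same percentage shift in a million-vote count requires thousands of ballots.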
In other words, it is far, far, easier to skew exit poll results than vote totals, if for no other reason than the difference in magnitude in the number of participants involved. Add to this, of course, that skewing exit polling results is legal, whereas voting fraud is a crime.
Finally, let me add to the original article. The demographics shifted a bit in this election on who voted for whom. Some traditional Democratic voters voted for Bush, and vice versa. I know too many on both sides here to disbelieve the statistics we have seen on this. But age, occupation, income, sex, ethnicity, etc. all affect a number of factors that have an effect on who votes when and whether they will respond to an exit poll. It will be interesting to see the results, if we ever do, of analysis of where the discrepancy occurred. Could it have been because more Bush voters voted later because they were working? It would especially be interesting to see the discrepancy as a function of the time when the polls were taken. Also, was it in some groups more than others?
Even if it were true that exit polls were becoming less accurate over the course of the last 15 years, while having been more accurate back in the 70s and 80s, we would have to ask ourselves what that actually told us.
After all, the elections of the 70s and 80s in the US were pretty easy to predict. The Ford/Carter race was the closest of the period, but in the aftermath of Watergate, it seems hard to imagine Ford winning.
The 90s brought a fractured electorate with two elections where the winner took less than 50% of the vote and a viable 3rd party candidate stripped votes from both Republicans and Democrats.
In 2000 and 2004, the major parties have vied over a very, very closely divided electorate.
Furthermore, increased use of technology by national political campaigns has allowed them to go after more and more granular demographics, while the fracturing of old voting blocs due to economic and geographic mobility has made large blocs of voters more scarce.
It makes sense that this would complicate the voting landscape and could well drive up the margins of error on exit polls.
Hmmmm.
The exit polls were driven, and paid for, by the MSM, who are friends of neither Republicans nor Conservatives. Is there any surprise that the exit polls would then favor Kerry?
I’ve been voting for over twenty years now and I’ve disregarded exit polls for that entire time. I figure they’re just paid bullshit used to jigger a victory by Democrats. Much like the “mistake” by the networks in the early Gore win of Florida in 2000 before polls even closed in the panhandle.
Is anyone really surprised by any of this? Most Conservative/Republicans were predicting the exit polls would favor Kerry weeks prior to the election. This isn’t news. It’s not even olds.
Bruce, yes, exit polls are far easier to skew than actual elections.. but the latter is so much more rewarding!
No, when exit polls and election outcome differ, that does not necessarily imply that the polls were right. But the contradiction between the best-funded poll in the world and the outcome of the election made people wonder. Subsequently, a myriad of strange incidents were found. Simple and compelling was Dr. Lange's count of absentee votes vs. absentee voters in Trumbull County, Ohio. He found on average 5.5 phantom absentee votes per precinct, which extrapolates to some 60,000 votes statewide. (After that, Lange's voter books were locked down as a consequence of the Ohio recount.) One would expect that a solid check of the election would actually include an audit of voter books, cross-checks and so on, but apparently not so. The authorities restricted the recount to its bare minimum, and one wonders why.
When the recount was over, Dr. Lange resumed his efforts and found for Cuyahoga County (Cleveland) a result quite similar to the one in Trumbull County. This is just one of many such effects, cited to show that little things may mean a lot.
Does all of this prove conclusively that fraud occurred? Of course not. But then, thus far, no one has been in a position to properly investigate, for instance, the Ohio election: the registration process, the purging of voter registers, the pre-election challenges to voter registrations, the rejection of provisional ballots, the reasons for the lack of voting machines in mainly Democratic precincts, and so on. I have yet to find a picture of a long line of affluent white voters, and I’ve seen plenty of the opposite.
The point is, by any standard of modern democracy, this is not the way to conduct free and fair elections. The integrity of the election is the issue. The way in which the U.S. election process has itself become subject to political and legal struggle, rather than being about that individual citizen in the polling booth making his or her choice (trying to paraphrase Churchill here), frankly frightens me.
Hmmm.
Now this is funny.
“He found on average 5.5 phantom absentee votes per precinct, which extrapolates to some 60000 votes statewide.”
Ok. Sooooo. If you’ve got an accurate average, then why do you need to extrapolate? How did that person compute this “average”? By reviewing each precinct? But then there’s no need to extrapolate at all, since the accurate number would be available. And if you’re having to extrapolate, that indicates the average itself is based on an insubstantial dataset.
(include witty repartee describing ridiculous “extrapolation” of insufficient data *here*)
Sjerp’s comment referred to the extrapolation from one county to the state, if I read him correctly.
Alex, thank you for clarifying. Lange is apparently a concerned citizen who, all by himself, went and counted these figures for one county, and later (after a lockdown) for another. Extrapolated, one arrives at a figure of around 60,000 statewide. This just shows how large discrepancies can arise from seemingly innocent differences at the precinct level. Lange’s efforts are now well documented in the Conyers report. The actual difference between votes and voters statewide…
My point was cross checking votes vs. voters (for instance) should be a check the relevant authorities do as part of the vote count, and when discrepancies are found, they should be investigated.
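For concreteness, here is the back-of-envelope arithmetic behind that 60,000-vote extrapolation. The 5.5-votes-per-precinct figure comes from the comments above; the statewide precinct count (roughly 11,000 for Ohio in 2004) is my own assumption, so treat this as a sketch rather than Lange's actual calculation:

```python
# Back-of-envelope check of the extrapolation discussed above.
# 5.5 phantom absentee votes per precinct is the figure reported in the thread;
# the statewide precinct count (~11,000 for Ohio in 2004) is an assumption.
phantom_per_precinct = 5.5
ohio_precincts = 11_000  # assumed approximate 2004 figure

statewide_estimate = phantom_per_precinct * ohio_precincts
print(f"{statewide_estimate:,.0f}")  # prints 60,500
```

With ~11,000 precincts the estimate lands near the 60,000 cited; the critique above stands, though: multiplying a two-county average across the whole state magnifies any error in that average by the same factor.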
And Rick (see post above), I have looked around the Internet a bit, but could find no serious overview of exit polls vs. election results in Europe. Freeman, though, in his latest paper (dated 29 December 2004, see page 8), available widely on the Internet, has data on the last three national elections in Germany. The predictive value, at least in these cases, is extremely high.
(Should you or anyone else be interested in pursuing this further, visit http://www.ebu.org – the European Broadcasting Union, a loose federation of major networks – go to the member list, visit the sites of members in established democracies, look for an e-mail address, ask which institute does their election polling, and mail these institutes with your specific questions.)
Simon and Baiman on Exit Polls
Freepress.org recently published a paper by Jonathan D. Simon, J.D., and Ron P. Baiman, Ph.D., with the title, “The 2004 Presidential Election: Who Won The Popular Vote? An Examination of the Comparative Validity of Exit Poll and Vote Count Data.”…
Critique of Simon/Baiman Exit Poll Paper
I’m reviving two posts from the old SCO related to my work on exit polls. I wrote this post in response to a paper by Jonathan D. Simon, J.D., and Ron P. Baiman, Ph.D., with the title, “The 2004 Presidential…
What about selection bias (by the interviewer as well as higher up)? The one exit poll I was ever at was conducted by a college-age man; of course someone like this would prefer to interview single, professional women (tending to support Kerry) — I did see somewhere that 58% of the respondents on the exit poll were women.
Also, I *thought* the exit pollsters did not report just a raw total but a modeled total (not sure it’s called that) — and compare the result to say the previous election in the same precinct. But what I saw on Drudge this year looked like raw totals and nothing more.
For instance, the collar counties of Philadelphia went for Kerry; prior to 1988 those were Republican-leaning counties. Are the pollsters using out-of-date models (if in fact they are modeling the data)?
Also, have the exit pollsters always released totals (to the public)? As I recall, back in the day, all that ever got released was a winner. Each of the major networks had the same data, and they pretended they didn’t….although each network had a different risk level…so if the exit poll showed Carter 51% and Ford 49% in Illinois, CBS might say “Illinois votes for Carter” (as I recall, they actually did this), while ABC might say “Illinois too close to call.” The actual exit poll would not be released, though.